Protest at synagogue in Koreatown ends in arrests, hate accusations

Los Angeles Times

The Audrey Irmas Pavilion, left, at the Wilshire Boulevard Temple, center in background, in 2021. Two people were arrested during a pro-Palestinian protest at Wilshire Boulevard Temple that ended in confrontation.




Question-Answer Sentence Graph for Joint Modeling Answer Selection

Iyer, Roshni G., Vu, Thuy, Moschitti, Alessandro, Sun, Yizhou

arXiv.org Artificial Intelligence

This research studies graph-based approaches for Answer Sentence Selection (AS2), an essential component for retrieval-based Question Answering (QA) systems. During offline learning, our model constructs a small-scale relevant training graph per question in an unsupervised manner and integrates it with Graph Neural Networks. Graph nodes are question-sentence-to-answer-sentence pairs. We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs, and use thresholding on relevance scores to create graph edges. Online inference is then performed to solve the AS2 task on unseen queries. Experiments on two well-known academic benchmarks and a real-world dataset show that our approach consistently outperforms SOTA QA baseline models.
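The edge-construction step described above can be sketched in a few lines. This is a toy illustration, not the authors' code: the `overlap_score` function is a hypothetical stand-in for the trained SOTA scoring models, and the node texts and threshold are invented for the example.

```python
# Hypothetical sketch: build an AS2 graph per question by thresholding
# pairwise relevance scores between (question, answer) pair nodes.
# The paper uses trained SOTA models for q-q, q-a, and a-a scoring;
# here a toy word-overlap score stands in for them.

def build_edges(nodes, score_fn, threshold):
    """Connect node pairs whose relevance score meets the threshold."""
    edges = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if score_fn(nodes[i], nodes[j]) >= threshold:
                edges.append((i, j))
    return edges

def overlap_score(pair_a, pair_b):
    """Toy relevance: Jaccard word overlap between the pair texts."""
    wa = set((pair_a[0] + " " + pair_a[1]).lower().split())
    wb = set((pair_b[0] + " " + pair_b[1]).lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

nodes = [
    ("who wrote hamlet", "shakespeare wrote hamlet"),
    ("who wrote hamlet", "hamlet is a tragedy"),
    ("who wrote hamlet", "the moon orbits earth"),
]
edges = build_edges(nodes, overlap_score, threshold=0.3)
```

The resulting edge list, together with the node pairs, is what a Graph Neural Network would then consume.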


Acceleration of Subspace Learning Machine via Particle Swarm Optimization and Parallel Processing

Fu, Hongyu, Yang, Yijing, Liu, Yuhuai, Lin, Joseph, Harrison, Ethan, Mishra, Vinod K., Kuo, C. -C. Jay

arXiv.org Artificial Intelligence

Built upon the decision tree (DT) classification and regression idea, the subspace learning machine (SLM) has been recently proposed to offer higher performance in general classification and regression tasks. This performance improvement comes at the expense of higher computational complexity. In this work, we investigate two ways to accelerate SLM. First, we adopt the particle swarm optimization (PSO) algorithm to speed up the search for a discriminant dimension expressed as a linear combination of current dimensions. The search for optimal weights in the linear combination is computationally heavy and is accomplished by probabilistic search in the original SLM; accelerating SLM with PSO requires 10-20 times fewer iterations. Second, we leverage parallel processing in the SLM implementation. Experimental results show that the accelerated SLM method achieves a speed-up factor of 577 in training time while maintaining classification/regression performance comparable to the original SLM.
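The weight search that PSO replaces can be illustrated with a minimal swarm. This is an assumed, generic PSO sketch, not the authors' implementation: the fitness function, swarm size, and coefficients are invented for the example, with a toy quadratic standing in for SLM's discriminant-quality objective.

```python
import random

# Illustrative PSO sketch (not the authors' code): search for a weight
# vector w maximizing a fitness function, standing in for SLM's search
# of a discriminant linear combination of current dimensions.

def pso(fitness, dim, n_particles=10, iters=50, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    gbest = max(pbest, key=fitness)[:]        # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull toward personal and global bests.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy fitness with its peak at w = (0.5, -0.5).
best = pso(lambda w: -((w[0] - 0.5) ** 2 + (w[1] + 0.5) ** 2), dim=2)
```

A probabilistic search would sample candidate weight vectors independently; the swarm instead shares the best-so-far position, which is why it typically needs far fewer evaluations.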


Remote AR/VR openings in Boston on August 06, 2022

#artificialintelligence

Role in Los Angeles (required months of experience: no data provided). Qualifications:
• 3 years' experience in enterprise product management
• 1 year's experience managing immersive (AR/VR) products specifically
• Bachelor's degree in HCI or engineering (or equivalent on-the-job experience)
• 3 years' experience in a dynamic product management role
• Proven experience overseeing all elements of the product development lifecycle
• Highly effective cross-functional team management
• Previous experience delivering finely tuned product marketing strategies
• Exceptional writing and editing skills combined with strong presentation and public speaking skills
• Outstanding portfolio and professional references
• Proven track record of delivering complex products to difficult markets
BadVR is the world's first immersive data visualization and analytics platform. BadVR brings data into high definition, making it easier to discover and identify hidden problems and opportunities, helping businesses make better decisions, faster. Based in Manhattan Beach, CA, the rapidly growing tech startup has attracted industry attention with its pioneering AR and VR demos, allowing people to, quite literally, "step inside their data." Our product is already empowering users across America through our work with Magic Leap, UNDP, the National Science Foundation, and more. But we are just getting started!


The Morning After: Apple's Mac and Google's Pixel events, previewed

Engadget

Apple's second fall product event kicks off later today at 1 PM ET. We've laid out what to expect, but it's not the only big tech event this week. Spare a thought for some of our staff, who will go straight from Apple reportage into Google's. Yep, Tuesday, October 19th is Google's Pixel 6 event. While we know what the phone will look like -- and some of its specifications -- expect to see some software surprises.


CDA: a Cost Efficient Content-based Multilingual Web Document Aligner

Vu, Thuy, Moschitti, Alessandro

arXiv.org Artificial Intelligence

We introduce a Content-based Document Alignment approach (CDA), an efficient method to align multilingual web documents based on content, for creating parallel training data for machine translation (MT) systems operating at the industrial level. CDA works in two steps: (i) projecting documents of a web domain into a shared multilingual space; then (ii) aligning them based on the similarity of their representations in that space. We leverage lexical translation models to build vector representations using TF-IDF. CDA achieves performance comparable with state-of-the-art systems in the WMT-16 Bilingual Document Alignment Shared Task benchmark while operating in multilingual space. In addition, we created two web-scale datasets to examine the robustness of CDA in an industrial setting involving up to 28 languages and millions of documents. The experiments show that CDA is robust, cost-effective, and significantly superior in (i) processing large and noisy web data and (ii) scaling to new and low-resourced languages.
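The second step, similarity-based alignment, can be sketched as follows. This is a simplified illustration under stated assumptions: plain bag-of-words counts stand in for the paper's translated TF-IDF vectors, and the toy target texts are assumed to be already projected into the source language by step (i).

```python
import math
from collections import Counter

# Hedged sketch of CDA's alignment step: given documents already
# projected into a shared space (toy bag-of-words counts here, in
# place of translated TF-IDF vectors), match each source document
# with the most similar target document by cosine similarity.

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def align(sources, targets):
    """Return, for each source doc, the index of its best target doc."""
    vecs_s = [Counter(s.split()) for s in sources]
    vecs_t = [Counter(t.split()) for t in targets]
    return [max(range(len(targets)), key=lambda j: cosine(v, vecs_t[j]))
            for v in vecs_s]

# Toy example: target texts assumed pre-translated into the source language.
pairs = align(["cats like fish", "dogs chase cats"],
              ["dogs chase balls and cats", "cats like fresh fish"])
```

A production system would use TF-IDF weighting and approximate nearest-neighbor search rather than this exhaustive pairwise comparison.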


Class-agnostic Object Detection

Jaiswal, Ayush, Wu, Yue, Natarajan, Pradeep, Natarajan, Premkumar

arXiv.org Machine Learning

Object detection models perform well at localizing and classifying objects that they are shown during training. However, due to the difficulty and cost associated with creating and annotating detection datasets, trained models detect a limited number of object types with unknown objects treated as background content. This hinders the adoption of conventional detectors in real-world applications like large-scale object matching, visual grounding, visual relation prediction, obstacle detection (where it is more important to determine the presence and location of objects than to find specific types), etc. We propose class-agnostic object detection as a new problem that focuses on detecting objects irrespective of their object-classes. Specifically, the goal is to predict bounding boxes for all objects in an image but not their object-classes. The predicted boxes can then be consumed by another system to perform application-specific classification, retrieval, etc. We propose training and evaluation protocols for benchmarking class-agnostic detectors to advance future research in this domain. Finally, we propose (1) baseline methods and (2) a new adversarial learning framework for class-agnostic detection that forces the model to exclude class-specific information from features used for predictions. Experimental results show that adversarial learning improves class-agnostic detection efficacy.
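The class-agnostic setting above can be made concrete with a small sketch. This is a hypothetical illustration, not the paper's method: it converts class-aware detections into class-agnostic ones by discarding labels and running plain non-maximum suppression across all classes, so downstream consumers see only geometry and confidence.

```python
# Hypothetical sketch of the class-agnostic output format: boxes and
# scores only, no class labels, with NMS applied across classes.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def class_agnostic_nms(dets, iou_thresh=0.5):
    """dets: (box, score, class_label) triples; labels are discarded."""
    boxes = sorted(((b, s) for b, s, _ in dets), key=lambda x: -x[1])
    kept = []
    for box, score in boxes:
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9, "cat"),
        ((1, 1, 10, 10), 0.8, "dog"),    # heavily overlaps the cat box
        ((20, 20, 30, 30), 0.7, "car")]
kept = class_agnostic_nms(dets)
```

Note the second detection is suppressed even though its label differs, since without classes the two boxes describe the same object location.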


Functional Regularization for Representation Learning: A Unified Theoretical Perspective

Garg, Siddhant, Liang, Yingyu

arXiv.org Machine Learning

Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks. While these approaches are widely used in practice and achieve impressive empirical gains, their theoretical understanding largely lags behind. Towards bridging this gap, we present a unifying perspective where several such approaches can be viewed as imposing a regularization on the representation via a learnable function using unlabeled data. We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of (Balcan and Blum, 2010) to allow learnable regularization functions. Our sample complexity bounds show that, with carefully chosen hypothesis classes to exploit the structure in the data, these learnable regularization functions can prune the hypothesis space, and help reduce the amount of labeled data needed. We then provide two concrete examples of functional regularization, one using auto-encoders and the other using masked self-supervision, and apply our framework to quantify the reduction in the sample complexity bound of labeled data. We also provide complementary empirical results to support our analysis.
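The core idea, a learnable function acting as a regularizer on the representation, can be written as a short objective sketch. This is an assumed formulation for illustration, not the paper's exact objective: an autoencoder's reconstruction error on unlabeled data is added to the supervised loss, which is one of the two concrete examples the abstract mentions.

```python
# Illustrative sketch (assumed formulation): functional regularization
# adds a penalty computed by a learnable function -- here an
# autoencoder's reconstruction error on unlabeled data -- to the
# supervised loss, effectively shrinking the hypothesis space.

def squared_error(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def regularized_loss(sup_loss, encoder, decoder, unlabeled, lam=0.1):
    """Supervised loss plus lam * mean autoencoder reconstruction error."""
    recon = sum(squared_error(x, decoder(encoder(x))) for x in unlabeled)
    return sup_loss + lam * recon / len(unlabeled)

# Toy encoder/decoder pair that halves, then doubles, each coordinate:
# perfect reconstruction, so the penalty term vanishes.
enc = lambda x: [v / 2 for v in x]
dec = lambda z: [v * 2 for v in z]
loss = regularized_loss(1.0, enc, dec, [[1.0, 2.0], [3.0, 4.0]])
```

In the framework's terms, representations whose reconstruction error is large are penalized, which is how the unlabeled data prunes the hypothesis space before labeled data is spent.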